
Inside Macintosh: Open Transport /
Chapter 3 - Endpoints


Using Endpoints

This section begins by explaining how you create an endpoint and associate it with an address. Next, it introduces the functions you can use to obtain information about endpoints and discusses some issues relating to asynchronous processing that specifically affect endpoint providers. Then, it explains some issues relating to data transfer that apply to all types of endpoint providers. Finally, it describes how you can implement each mode of service.

No matter what mode of service you want to implement, you must read the sections "Opening and Binding Endpoints," "Obtaining Information About Endpoints," "Handling Events for Endpoints," and "Sending and Receiving Data." After you have read these sections, you can read the section describing the mode of service you are interested in implementing. Table 3-5 shows how some of the Open Transport protocols fit with an endpoint's mode of service. For example, if you want to use ATP, you would need to read the section "Using Connectionless Transaction-Based Service," beginning on page 3-39. If you want to use ADSP, you would need to read the section "Establishing and Terminating Connections," beginning on page 3-26 and the section "Using Connection-Oriented Transactionless Service," beginning on page 3-36.
Table 3-5 The Open Transport mode-of-service matrix and some Open Transport protocols
                     Connectionless           Connection-oriented
Transactionless      DDP                      Serial connection
                     PPP                      ADSP
                     IP                       PAP
                     UDP                      TCP
Transaction-based    ATP                      ASP

Note
The sections that follow present information in such a way as to suggest that communication is always taking place between two Open Transport clients. This does not have to be true. For example, an Open Transport client using a connectionless transactionless DDP endpoint can communicate seamlessly with a client using AppleTalk's DDP protocol and interface. However, because this book is about Open Transport, we always show how communication works between two Open Transport clients.

Opening and Binding Endpoints

Before you can open and bind an endpoint, you must have initialized Open Transport and determined what the endpoint configuration is going to be. Then, you can open and bind the endpoint. You open the endpoint with the OTOpenEndpoint or OTAsyncOpenEndpoint function. Opening an endpoint with the OTOpenEndpoint function sets the default mode of execution to be synchronous; opening an endpoint with the OTAsyncOpenEndpoint function sets the default mode of execution to be asynchronous. You can change an endpoint's mode of execution at any time by calling the OTSetSynchronous or OTSetAsynchronous function; both functions are described in the chapter "Providers" in this book.

One of the parameters that you pass to the function used to open the endpoint is a pointer to a configuration structure that Open Transport needs to define the protocol stack providing data transport services. You can use the same configuration for more than one endpoint; however, if you do so, you must use the OTCloneConfiguration function to get a valid copy of the configuration structure. The chapter "Configuration Management," in this book, contains information about creating a configuration structure for an endpoint provider.

If you use the OTAsyncOpenEndpoint function to open an endpoint, you also specify the entry point to a notifier function that the endpoint provider can
use to call your application when an asynchronous or completion event takes place. If you use the OTOpenEndpoint function to open an endpoint, and you want to handle asynchronous events using a notifier function, you must
use the OTInstallNotifier function to install your notifier function. The OTInstallNotifier function is described in the chapter "Providers" in
this book.
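
For example, you might open an endpoint asynchronously with a notifier as in the following minimal sketch. The configuration string "tcp" and the empty event handling are only illustrative; substitute the configuration and events your application actually needs.

#include <OpenTransport.h>

static EndpointRef gEndpoint = kOTInvalidEndpointRef;

/* The notifier runs at deferred-task time, so it should not allocate
   memory or call synchronous Open Transport functions. */
static pascal void MyNotifier(void* contextPtr, OTEventCode code,
                              OTResult result, void* cookie)
{
    switch (code) {
    case T_OPENCOMPLETE:                      /* OTAsyncOpenEndpoint finished */
        if (result == kOTNoError)
            gEndpoint = (EndpointRef)cookie;  /* the new endpoint reference */
        break;
    case T_BINDCOMPLETE:                      /* OTBind finished */
    case T_DATA:                              /* data has arrived */
    default:
        break;
    }
}

static OSStatus StartOpeningEndpoint(void)
{
    return OTAsyncOpenEndpoint(OTCreateConfiguration("tcp"),
                               0, NULL, MyNotifier, NULL);
}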

Opening an endpoint also sets up a private data structure that Open Transport uses to manage the endpoint provider's operations. Some of the information in this structure is private to Open Transport; the rest can be retrieved by calling functions that return information about the endpoint. These functions are described in the next section, "Obtaining Information About Endpoints."

When the function you use to open the endpoint returns, it passes back to you an endpoint reference. You must pass this reference as a parameter to any endpoint provider function or any general provider function. For example, you pass this reference as a parameter to the OTBind function, which you must use to bind an endpoint after opening it. Binding an endpoint associates the endpoint with a logical address. Depending on the protocol you use and on your application's needs, you can select a specific address or you can have the protocol choose an address for you. For information about valid address formats, consult the documentation for your protocol. The general rule for binding endpoints is simple: you cannot bind more than one connectionless endpoint to a single address. You can bind more than one connection-oriented endpoint to the same address; for additional information about this possibility, see the section "Processing Multiple Connection Requests" on page 3-29.
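
The following sketch binds an endpoint in synchronous mode and lets the protocol choose the address, one of the options just described. The 64-byte return buffer is an arbitrary size chosen for illustration; in real code you would size it from the addr field of the endpoint's TEndpointInfo structure.

#include <OpenTransport.h>

static OSStatus BindToAnyAddress(EndpointRef ep)
{
    TBind ret;
    UInt8 retBuf[64];          /* illustrative size; use TEndpointInfo.addr */

    ret.addr.buf    = retBuf;
    ret.addr.maxlen = sizeof(retBuf);
    ret.addr.len    = 0;
    ret.qlen        = 0;       /* 0: this endpoint will not listen */

    /* Passing NULL for reqAddr asks the protocol to pick an address;
       the address actually bound is returned in ret.addr. */
    return OTBind(ep, NULL, &ret);
}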

No matter what mode of service you need to implement, you must know how to obtain information about the endpoints you have opened and how to handle asynchronous and completion events for these endpoints. These issues are addressed in the next two sections, "Obtaining Information About Endpoints" and "Handling Events for Endpoints." After you read these sections, you can proceed by reading about the mode of service you want to implement.

Obtaining Information About Endpoints

You can use endpoint functions to obtain information about an endpoint's mode of service, state, or address. You can also call general provider functions, described in the chapter "Providers" in this book, to determine an endpoint's mode of execution and mode of operation.

The TEndpointInfo structure contains most of the information you need to determine how you can use an endpoint. This structure specifies the maximum size of the buffers you need to allocate when calling functions that return address and option information or data, and it also contains more specific details about the mode of service the endpoint provides. For example, if you have opened a connection-oriented endpoint, the servtype field of the TEndpointInfo structure specifies whether the endpoint supports orderly release. You can obtain a pointer to this structure when you open the endpoint, when you bind the endpoint, or when you call the OTGetEndpointInfo function.

To obtain information about an endpoint's state, you call the OTGetEndpointState function. This function returns a positive integer indicating the endpoint state or a negative integer corresponding to a result code. Table 3-2 on page 3-14 lists and describes endpoint states. If the endpoint is in asynchronous mode and you are not using a notifier function, you might be able to use the OTGetEndpointState function to poll the endpoint and determine whether a specific function has finished executing. The completion of some functions results in an endpoint's changing state. For additional information, see Table 3-3 on page 3-16.
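
The following sketch shows both calls: it uses OTGetEndpointInfo to check the servtype field for orderly-release support and OTGetEndpointState to check whether the endpoint is currently in the data transfer state.

#include <OpenTransport.h>

static Boolean SupportsOrderlyRelease(EndpointRef ep)
{
    TEndpointInfo info;

    if (OTGetEndpointInfo(ep, &info) != kOTNoError)
        return false;
    return (info.servtype == T_COTS_ORD || info.servtype == T_TRANS_ORD);
}

static Boolean IsInDataTransferState(EndpointRef ep)
{
    OTResult state = OTGetEndpointState(ep);   /* negative value is an error */

    return (state == T_DATAXFER);
}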

To obtain the address to which an endpoint is bound or the address of its peer, you can use the OTGetProtAddress function; to obtain the lowest-level address that corresponds to a given protocol address, you can use the OTResolveAddress function.

For information about the address formats for the protocol you are using, please consult the documentation supplied for the protocol. For information about obtaining the addresses that correspond to a name pattern, see the chapter "Mappers" in this book.

Handling Events for Endpoints

The section about modes of execution in the chapter "Providers" describes the functions you use to determine what a provider's mode of execution is and to change that mode if needed. It also discusses the special problems that might arise in asynchronous processing and recommends ways of handling these problems.

Like other providers, endpoint providers can operate synchronously or asynchronously. When possible, you should use endpoints in asynchronous mode. If you do, you need to create a notifier function that the provider can call to inform you when an asynchronous function has completed or when an asynchronous event has arrived. Event handling for endpoints is basically the same as that described for providers in the chapter "Providers." One slight difference lies in the way the endpoint provider generates T_DATA, T_EXDATA, and T_REQUEST asynchronous events, which signal the arrival of incoming data or of an incoming transaction request. For the sake of efficiency, the provider notifies you just once that incoming data has arrived. To read all the data, you must call the function that clears the event until the function returns with the kOTNoDataErr result. For information about which functions to use to clear these events, see Table 3-8 on page 3-25.

You do not have to issue these calls in the notifier function itself, but until you make the consuming calls and receive a kOTNoDataErr result, the provider does not issue another T_DATA, T_EXDATA, or T_REQUEST event. You should also be prepared to be notified that data is available and then to receive a kOTNoDataErr result when you try to read it.
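
The following sketch shows this pattern for a T_DATA event on a connection-oriented transactionless endpoint: it keeps calling OTRcv until the function returns kOTNoDataErr, at which point the provider will issue another T_DATA event when more data arrives. The 1024-byte chunk size is only illustrative.

#include <OpenTransport.h>

/* Call this when the notifier receives T_DATA (or later, from the main loop). */
static void DrainIncomingData(EndpointRef ep)
{
    UInt8    buffer[1024];      /* illustrative chunk size */
    OTFlags  flags;
    OTResult result;

    do {
        result = OTRcv(ep, buffer, sizeof(buffer), &flags);
        if (result > 0) {
            /* 'result' bytes are now in 'buffer'; consume them here. */
        }
    } while (result > 0);

    /* result is kOTNoDataErr when the queue is empty; any other negative
       result (for example kOTLookErr) needs separate handling. */
}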

One exception to this rule occurs with transaction-based protocols. When the client gets a T_REPLY event, it calls the OTRcvUReply function until a kOTNoDataErr result is returned. If this processing is deferred from the notifier function to the foreground, the following sequence can occur: while the client is busy reading replies in the foreground, a request arrives, which causes a T_REQUEST event to be generated. If the foreground client calls the OTRcvUReply function at this point, it gets a kOTLookErr result rather than kOTNoDataErr. In this case (and in the converse case for T_REQUEST events), another T_REPLY event is generated when a new reply arrives.

If we look at this operationally, the transport provider has a queue of data or commands to deliver to the client. If the queue is empty when the data or command arrives, a notification is delivered to the client. If the queue is not empty, then no notification is delivered to the client at the time the data or command is queued. Instead, whenever the client reads the data or command at the head of the queue, Open Transport peeks at the next element of the queue, if it exists. If this next element of the queue is of the same type as what was at the head of the queue, no event is generated. If there is a difference, a new event is delivered to the client. This new event is typically delivered to the client just prior to returning from the function which removed the head element of the queue.

Not all endpoint functions are affected by an endpoint's mode of execution. Those functions that do behave differently when they are executed asynchronously are listed in Table 3-6. For each function, the table lists the corresponding completion event.
Table 3-6 Endpoint functions that behave differently in synchronous and asynchronous mode
Function              Completion event
OTOptionManagement    T_OPTMGMTCOMPLETE
OTBind                T_BINDCOMPLETE
OTUnbind              T_UNBINDCOMPLETE
OTAccept              T_ACCEPTCOMPLETE
OTSndRequest          T_REQUESTCOMPLETE
OTSndReply            T_REPLYCOMPLETE
OTSndURequest         T_REQUESTCOMPLETE
OTSndUReply           T_REPLYCOMPLETE
OTSndDisconnect       T_DISCONNECTCOMPLETE
OTGetProtAddress      T_GETPROTADDRCOMPLETE
OTResolveAddress      T_RESOLVEADDRCOMPLETE

For compatibility with the XTI standard, Open Transport also includes the endpoint provider function OTLook. You can use the OTLook function to poll an endpoint for asynchronous events, which is useful if you are using the endpoint synchronously without a notifier, and to determine which asynchronous event caused a function to fail with the kOTLookErr result.
Establishing and Terminating Connections

To implement a connection-oriented service, you open and bind the endpoints, establish a connection, transfer data over the connection, and then terminate the connection when you are finished.

The following sections explain how you establish and terminate a connection. The functions you use to establish and terminate a connection are the same for transactionless as for transaction-based service. The calls you use to transfer data differ depending on which mode of service you choose--transactionless or transaction-based. The section "Using Connection-Oriented Transactionless Service" on page 3-36 explains how you transfer data once you have established a connection. In the text that follows, active peer refers to the endpoint initiating a connection; passive peer refers to the endpoint accepting a connection request.

Before you can use a connection-oriented endpoint to initiate or accept a connection, you must open and bind the endpoint. For example, if you are using AppleTalk, you might open an ADSP endpoint, which offers connection-oriented transactionless service. You don't have to do anything special to bind an endpoint that is intended to be the active peer of a connection. However, when you bind an endpoint intended to be the passive peer of a connection, you must specify a value for the qlen field of the reqAddr parameter for the OTBind function. The qlen field indicates the number of outstanding connection requests that can be queued for that endpoint. Note that the value you specify is only a requested value; Open Transport might negotiate a lower value, depending upon the number of internal buffers available. The negotiated number of outstanding connection indications is returned to you in the qlen field of the retAddr parameter for the OTBind function.

You are allowed to bind multiple connection-oriented endpoints to a single address. However, only one of these endpoints can accept incoming connection requests. That is, only one endpoint can specify a value for qlen that is greater than 0. For more information, see the section "Processing Multiple Connection Requests" on page 3-29.

Establishing a Connection

You use the following functions to establish a connection:
Function            Called by       Meaning
OTConnect           Active peer     Requests a connection to the passive peer.
OTListen            Passive peer    Listens for an incoming connection request.
OTAccept            Passive peer    Accepts the connection request identified by the OTListen function. The connection can be accepted by a different endpoint than the one listening for incoming connection requests.
OTRcvConnect        Active peer     Reads the status of a pending or completed asynchronous call to the OTConnect function.
OTSndDisconnect     Passive peer    Rejects an incoming connection request.
OTRcvDisconnect     Active peer     Identifies the cause of a rejected connection and acknowledges the corresponding disconnection event.

Figure 3-3 illustrates the process of establishing a connection in
synchronous mode.

Figure 3-3 Establishing a connection in synchronous mode

As Figure 3-3 shows, if the active peer is in synchronous mode, the OTConnect function does not return until the connection has been established or the connection attempt has been rejected. If the passive peer has a notifier function installed, the endpoint provider calls it, passing T_LISTEN for the code parameter. The notifier calls the OTListen function, which reads the connection request. The passive peer can now either accept the connection request using the OTAccept function or reject the request by calling the OTSndDisconnect function. The connection attempt might also fail if the request is never received and the endpoint provider times out the call to the OTConnect function.

If the passive peer calls the OTAccept function to accept the connection, the OTConnect function returns with kOTNoError. If the passive peer rejects the connection by executing the OTSndDisconnect function or if the request times out, the OTConnect function returns with kOTLookErr. When the OTConnect function returns, the active peer must examine the result and, depending on the outcome, either begin to transfer data if the function succeeds or call the OTRcvDisconnect function if the function fails. The active peer must call the OTRcvDisconnect function to restore the endpoint to a valid state for subsequent operations. Note that even though the passive peer is in synchronous mode, you can use a notifier function to be notified of a T_LISTEN event. Alternatively, you can use the OTLook function to poll the passive endpoint for a T_LISTEN event.
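
The following sketch shows the active peer's side of this synchronous sequence, using TCP only as an example; the address setup and the decision to pass NULL for the returned connection information are assumptions you would adjust for your own protocol.

#include <OpenTransport.h>
#include <OpenTptInternet.h>

static OSStatus ConnectSynchronously(EndpointRef ep, InetHost host, InetPort port)
{
    TCall       sndCall;
    InetAddress addr;
    OSStatus    err;

    OTInitInetAddress(&addr, port, host);
    OTMemzero(&sndCall, sizeof(sndCall));
    sndCall.addr.buf = (UInt8*)&addr;
    sndCall.addr.len = sizeof(addr);

    err = OTConnect(ep, &sndCall, NULL);       /* blocks until accepted or rejected */
    if (err == kOTLookErr && OTLook(ep) == T_DISCONNECT) {
        /* Rejected or timed out: clear the event to return the endpoint
           to a valid state before trying again. */
        (void) OTRcvDisconnect(ep, NULL);
    }
    return err;
}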

If the active peer is in asynchronous mode, the OTConnect function returns right away, and the active peer must rely on its notifier function to determine whether the call succeeded. Figure 3-4 illustrates the process of establishing a connection when the active peer is in asynchronous mode.

Figure 3-4 Establishing a connection in asynchronous mode

The active peer calls the OTConnect function, which returns right away with a kOTNoDataErr result, indicating that the connection attempt is in progress. The endpoint provider calls the passive peer's notifier, passing T_LISTEN for the code parameter. If the passive peer accepts the connection, the endpoint provider calls the active peer's notifier, passing T_CONNECT for the code parameter.

If the passive peer rejects the connection or if the connection times out, the endpoint provider calls the active peer's notifier, passing T_DISCONNECT for the code parameter. The active peer must then call either the OTRcvConnect function in response to a T_CONNECT event or the OTRcvDisconnect function in response to a T_DISCONNECT event. The endpoint provider, in turn, passes the T_ACCEPTCOMPLETE event back to the passive peer (for a successful connection) or the T_DISCONNECTCOMPLETE event (for a failed connection). The passive peer requires the information provided by these two events to determine whether the connection succeeded.
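
A sketch of the active peer's notifier for this asynchronous case follows. It assumes a global gEndpoint variable that was set when the endpoint finished opening (as in the earlier notifier sketch), and it passes NULL to OTRcvConnect and OTRcvDisconnect because this sketch does not need the information those structures return.

#include <OpenTransport.h>

extern EndpointRef gEndpoint;      /* set when T_OPENCOMPLETE was handled */

static pascal void ActivePeerNotifier(void* contextPtr, OTEventCode code,
                                      OTResult result, void* cookie)
{
    switch (code) {
    case T_CONNECT:                               /* passive peer accepted */
        /* Pass a TCall structure instead of NULL if you need the
           connection information that is returned. */
        (void) OTRcvConnect(gEndpoint, NULL);
        break;
    case T_DISCONNECT:                            /* rejected or timed out */
        (void) OTRcvDisconnect(gEndpoint, NULL);  /* clears the event */
        break;
    default:
        break;
    }
}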

Sending User Data With Connection or Disconnection Requests

The OTConnect function and the OTSndDisconnect function both pass data structures that include fields for data that you might want to send at the time that you are setting up or tearing down a connection. However, you can only send data when calling these two functions if the connect and discon fields of the TEndpointInfo structure specify that the endpoint can send data with connection or disconnection requests. The amount of data sent must not exceed the limits specified by these two fields. To determine whether the endpoint provider for your endpoint supports data transfer during the establishment of a connection, you must examine the connect and discon fields of the TEndpointInfo structure for the endpoint.

Processing Multiple Connection Requests

If you process multiple connection requests for a single endpoint, you must make sure that the number of outstanding connection requests does not exceed the limit defined for the listening endpoint when you bound that endpoint. An outstanding connection request is a request that you have read using the OTListen function but that you have neither accepted nor rejected. You must also decide whether to accept connections on the same endpoint that is listening for the connections or on a different endpoint.

When you bind the passive endpoint, you must specify a value greater than 0 for the qlen field of the reqAddr parameter to the OTBind function. This value indicates the number of outstanding connections that the provider can queue for this endpoint. Note that Open Transport might negotiate this number to a lower value. If it does, the negotiated value is returned in the qlen field of the retAddr parameter when the OTBind function returns. As you process incoming connection requests, you must check that the number of connections still waiting to be processed does not exceed this negotiated value for the listening endpoint. How you do this depends on the number of outstanding requests and on whether you accept connection requests on the same endpoint that is listening for requests or on a different endpoint.

In practical terms, if you anticipate managing more than one connection at a time, you should open one endpoint to listen for connections and then open additional endpoints as needed to accept incoming connections. Whether you bind the additional endpoints to the same address or to a different address is affected only by the availability of endpoints to your application.
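
The following synchronous sketch shows this hand-off pattern: one endpoint listens, and each connection request is accepted on a freshly opened endpoint. The "adsp" configuration string and the 64-byte address buffer are placeholders, and the sketch assumes that the provider binds the unbound hand-off endpoint as part of the accept.

#include <OpenTransport.h>

static OSStatus AcceptOnNewEndpoint(EndpointRef listener)
{
    TCall       call;
    UInt8       addrBuf[64];      /* placeholder; size from TEndpointInfo.addr */
    OSStatus    err;
    EndpointRef worker;

    OTMemzero(&call, sizeof(call));
    call.addr.buf    = addrBuf;
    call.addr.maxlen = sizeof(addrBuf);

    err = OTListen(listener, &call);          /* read one connection request */
    if (err != kOTNoError)
        return err;

    worker = OTOpenEndpoint(OTCreateConfiguration("adsp"), 0, NULL, &err);
    if (err != kOTNoError) {
        (void) OTSndDisconnect(listener, &call);   /* reject the request */
        return err;
    }

    return OTAccept(listener, worker, &call);
}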

Terminating a Connection

You can terminate a connection using either an abortive or orderly disconnect. During an abortive disconnect, the connection is torn down without the underlying protocol taking any steps to make sure that data being transferred has been sent and received. When the client calls the OTSndDisconnect function, the connection is immediately torn down, and the client cannot be sure that the provider actually sent any locally buffered data. During an orderly disconnect, the underlying protocol ensures at least that all outgoing data is actually sent. Some protocols go further than this, using an over-the-wire handshake that allows both peers to finish transferring data and agree to disconnect. The following sections describe the steps required for abortive and orderly disconnects.

Using an Abortive Disconnect

You use the OTSndDisconnect and OTRcvDisconnect functions to perform an abortive disconnect. Figure 3-5 illustrates the process for two asynchronous endpoints. The figure shows the active peer initiating the disconnection; in fact, either endpoint can initiate the disconnection.

Figure 3-5 An abortive disconnect

In asynchronous mode, the endpoint initiating the disconnection calls the OTSndDisconnect function. Parameters to the function identify the endpoint and point to a TCall structure that is of interest only if the endpoint provider supports sending data with disconnection requests. To determine whether your protocol does, examine the value of the discon field of the TEndpointInfo structure for your endpoint. If you do not want to send data, or if you cannot send data to the passive peer, you can pass a NULL pointer instead of a pointer to a TCall structure.

The endpoint provider receiving the disconnection request calls the passive peer's notifier function, passing T_DISCONNECT for the code parameter. The client must acknowledge the disconnection event by calling the OTRcvDisconnect function, which clears the event and retrieves any data sent with it. Parameters to the OTRcvDisconnect function identify the endpoint and point to a TDiscon structure. This structure is of interest only if the endpoint provider supports sending data with disconnection requests, or if the passive peer is managing multiple connections and needs to determine which connection has been closed by examining the sequence field of the TDiscon structure. Otherwise, you can pass a NULL pointer instead of a pointer to a TDiscon structure. When the connection has been closed, the endpoint provider calls the active peer's notifier, passing T_DISCONNECTCOMPLETE for the code parameter. At this point the endpoint is once more in the T_IDLE state.

Using Orderly Disconnects

There are two kinds of orderly disconnects: remote orderly disconnects and local orderly disconnects. The first kind, supported by TCP, provides an over-the-wire (three-way) handshake that guarantees that all data has been sent and that both peers have agreed to disconnect. The second kind, supported by ADSP and most other connection-oriented transactionless protocols, is a locally implemented orderly release mechanism ensuring that data currently being transferred has been received by both peers before the connection is torn down. To determine whether your protocol supports orderly disconnects, you must examine the servtype field of the TEndpointInfo structure for the endpoint. A value of T_COTS_ORD or T_TRANS_ORD indicates that the endpoint supports orderly release. It is safest to assume, unless you know for certain it to be otherwise, that the endpoint supports only local orderly disconnects.

Figure 3-6 shows the steps required to complete a remote orderly disconnect. The figure shows the active peer initiating the disconnection; in fact, either peer can initiate the disconnection.

Figure 3-6 Remote orderly disconnect

The active peer initiates the disconnection by calling the OTSndOrderlyDisconnect function to begin the process and to let the remote endpoint know that the active peer will not send any more data. (Following the call to this function, the active peer can receive data but it cannot send any more data.) The provider calls the passive peer's notifier function, passing T_ORDREL for the code parameter. In response, the passive peer must read any unread data and can send additional data. After it has finished reading the data, it must call the OTRcvOrderlyDisconnect function to acknowledge receipt of the orderly release indication. After calling this function, the passive peer must not attempt to read any more data; however, it can continue to send data. When the passive peer is finished sending any additional data, it must call the OTSndOrderlyDisconnect function to complete its part of the disconnection. Following this call, it cannot send any data. The endpoint provider calls the active peer's notifier, passing T_ORDREL for the code parameter, and the active peer calls the OTRcvOrderlyDisconnect function to acknowledge receipt of the disconnection event and to place the endpoint in the T_IDLE state if this was the only outstanding connection.
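
The following sketch shows the active peer's half of a remote orderly disconnect in synchronous blocking mode: it announces that it has finished sending, keeps reading until the passive peer's release arrives, and then acknowledges it. The 512-byte buffer is illustrative.

#include <OpenTransport.h>

static OSStatus OrderlyShutdown(EndpointRef ep)
{
    UInt8    buffer[512];
    OTFlags  flags;
    OTResult rcv;
    OSStatus err;

    err = OTSndOrderlyDisconnect(ep);      /* "no more data from this side" */
    if (err != kOTNoError)
        return err;

    for (;;) {
        rcv = OTRcv(ep, buffer, sizeof(buffer), &flags);
        if (rcv >= 0)
            continue;                                /* consume remaining data */
        if (rcv == kOTLookErr && OTLook(ep) == T_ORDREL)
            return OTRcvOrderlyDisconnect(ep);       /* acknowledge the release */
        return rcv;                                  /* some other error */
    }
}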

Figure 3-7 shows the steps required to complete a local orderly disconnect.

Figure 3-7 A local orderly disconnect

As you can see, the sequence of steps is very similar to that shown in Figure 3-6. The main difference is that the connection is broken as soon as the active peer calls the OTSndOrderlyDisconnect function. As a result, either peer can continue to read any unread data, but neither peer can send data after the initial call to the OTSndOrderlyDisconnect function.

Sending and Receiving Data

This section describes some of the issues that affect send and receive operations inasmuch as these issues affect every type of endpoint. After you read this section, you can read whichever section describes the type of data transfer you are interested in.

Sending Noncontiguous Data

When sending data, you normally use a TNetbuf structure to specify the location and size of the buffer containing the data to be sent. Open Transport also allows you to send data that is not contiguous; however, you need to use a different structure to specify the location of the data fragments in memory. This structure is called the OTData structure.

Figure 3-8 shows how you use OTData structures to describe noncontiguous data. The first structure, myOTD1, contains information about the first data fragment: the fData field contains the starting address of the fragment, and the fLen field contains the length of the fragment. The fNext field contains the address of the OTData structure myOTD2, which specifies the size and location of the second fragment. In turn, the structure myOTD2 contains the address of the OTData structure that specifies the location and size of the third fragment. You must set the fNext field of the OTData structure used to describe the last data fragment to NULL.

Figure 3-8 Describing noncontiguous data
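
The following sketch builds a chain like the one shown in Figure 3-8 and hands it to OTSnd. It assumes the kNetbufDataIsOTData constant from OpenTransport.h, which tells the provider that the buffer pointer refers to an OTData chain rather than to raw bytes.

#include <OpenTransport.h>

static OTResult SendThreeFragments(EndpointRef ep,
                                   void* frag1, ByteCount len1,
                                   void* frag2, ByteCount len2,
                                   void* frag3, ByteCount len3)
{
    OTData d1, d2, d3;

    d1.fData = frag1;  d1.fLen = len1;  d1.fNext = &d2;
    d2.fData = frag2;  d2.fLen = len2;  d2.fNext = &d3;
    d3.fData = frag3;  d3.fLen = len3;  d3.fNext = NULL;   /* last fragment */

    return OTSnd(ep, &d1, kNetbufDataIsOTData, 0);
}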

Sending Data Using Multiple Sends

If you are sending a data unit using multiple sends, you must do the following:

  1. Set the T_MORE bit in the flags field each time you call the function except the last. This lets the provider know that it has not yet received the entire data unit.
  2. Clear the T_MORE bit the last time you call the function. This lets the provider know that the data unit is complete.

Even though you are using multiple sends to send the data, the total size of the data sent cannot exceed the value specified for the tsdu field (for normal data or replies) or etsdu field (for expedited data or requests) of the TEndpointInfo structure for the endpoint.

Sending data using multiple sends does not necessarily affect the way in which the remote client receives the data. That is, just because you have used several calls to a send function to send data does not mean that the remote client must call a receiving function several times to read the data.
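
A minimal sketch of a two-part send follows; for clarity it ignores flow control, so a negative result such as kOTFlowErr or a partial send would need additional handling in real code.

#include <OpenTransport.h>

static OTResult SendUnitInTwoPieces(EndpointRef ep,
                                    void* part1, ByteCount len1,
                                    void* part2, ByteCount len2)
{
    OTResult sent;

    sent = OTSnd(ep, part1, len1, T_MORE);   /* more of this data unit to come */
    if (sent < 0)
        return sent;
    return OTSnd(ep, part2, len2, 0);        /* T_MORE cleared: unit complete */
}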

Receiving Data

If you are reading data and if the T_MORE bit in the flags field is set, this means that the buffer you have allocated to hold the data is not big enough. You must copy the data you have already received to a different buffer and then call the receive function again to read more data until the T_MORE bit is cleared, which indicates that you have read the entire data unit.
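
The following sketch accumulates one complete data unit into a caller-supplied buffer, assuming the buffer is large enough to hold the whole unit.

#include <OpenTransport.h>

static OTResult ReadWholeDataUnit(EndpointRef ep, UInt8* dest,
                                  ByteCount destSize, ByteCount* totalRead)
{
    OTFlags  flags;
    OTResult result;

    *totalRead = 0;
    do {
        result = OTRcv(ep, dest + *totalRead, destSize - *totalRead, &flags);
        if (result < 0)
            return result;            /* kOTNoDataErr, kOTLookErr, and so on */
        *totalRead += (ByteCount)result;
    } while (flags & T_MORE);         /* T_MORE set: the unit continues */

    return kOTNoError;
}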

No-Copy Receiving

Open Transport allows you to receive data without doing the extra copying that is normally involved in receiving data, which can save time and resources. For example, you might have received some data that needs to be written to disk and you have four files, each with a different buffer, that are expecting data. Normally what you would do is store the data into a temporary buffer while you determined which of the four files was the right destination. When you identified the target, you'd then copy the data from the temporary buffer into that file's buffer.

A no-copy receive allows you to peek at the data when you receive it and write it out immediately. Open Transport does this by giving you access to a special no-copy receive buffer, the OTBuffer structure. To take advantage of this buffer, you must treat it strictly as read-only and release it as soon as you are finished with it, as described next.

WARNING
The no-copy receive buffer is read-only, and you must never, under any circumstances, attempt to write to it. If you write to it, you can crash the system.
You need to release the no-copy receive buffer (with the OTReleaseBuffer function) as soon as you are finished using it so that you are not tying up system resources required elsewhere. If you hold onto the buffer, one consequence is that your Ethernet driver must start making its own copies as it receives more data, and if it is not well designed, it might run out of space and lose packets.

The no-copy receive buffer is actually a linked chain of buffers, with the next buffer pointed to by the fNext field in each buffer. You can access all of the received data by tracing the chain of fNext pointers. For your convenience, Open Transport provides the OTBufferInfo structure and the utility functions OTReadBuffer and OTBufferDataSize to read through the OTBuffer structure.
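
The following sketch performs a no-copy receive and releases the buffer immediately. It assumes the kNetbufDataIsOTBufferStar constant and the OTInitBufferInfo routine from OpenTransport.h; the remaining calls are the ones named above.

#include <OpenTransport.h>

static OTResult NoCopyReceive(EndpointRef ep, UInt8* dest, ByteCount destSize)
{
    OTBuffer*    buffer = NULL;
    OTBufferInfo info;
    OTFlags      flags;
    OTResult     result;
    ByteCount    len;

    /* Asking for kNetbufDataIsOTBufferStar bytes requests a no-copy receive;
       the provider returns a pointer to its own read-only buffer chain. */
    result = OTRcv(ep, &buffer, kNetbufDataIsOTBufferStar, &flags);
    if (result < 0)
        return result;

    OTInitBufferInfo(&info, buffer);
    len = OTBufferDataSize(buffer);
    if (len > destSize)
        len = destSize;
    OTReadBuffer(&info, dest, &len);   /* copy out only what we need */

    OTReleaseBuffer(buffer);           /* release as soon as possible */
    return (OTResult)len;
}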

Transferring Data Efficiently

Some protocols support XTI-level options that you can use to change the size of Open Transport's internal send and receive buffers and to change the size of the "low-water mark" that Open Transport uses to determine how much data should accumulate in these buffers before it sends the data or lets the client know that data has arrived. If your protocol supports these options, you can reset these values to fit your application's needs. For more information, see the section describing XTI-level options in the chapter "Option Management" in this book.

Transferring Data Between Transactionless Endpoints

Open Transport defines two sets of functions that you can use to send and receive data. You use one set with connectionless service and the other with connection-oriented service.

Using Connectionless Transactionless Service

You use connectionless transactionless service, as provided by AppleTalk's DDP for example, to send and receive discrete data packets. Most often applications use higher-level protocols that depend, in turn, upon more basic protocols that use connectionless transactionless service. For example, all of AppleTalk's higher-level protocols make use of DDP to send and receive data.

After opening and binding a connectionless transactionless endpoint, you can use three functions to transfer data: the OTSndUData function to send a datagram, the OTRcvUData function to receive a datagram, and the OTRcvUDErr function to clear an error condition resulting from an earlier send.

Either endpoint can send or receive data. However, the endpoint sending data cannot determine whether the other endpoint has actually received the data.

Some endpoints are not able to determine that the specified address or options are invalid until after the data is sent. In this case, the sender's endpoint provider issues the T_UDERR event. You should include code in your notifier function that calls the OTRcvUDErr function in response to this event to determine what caused the send function to fail and to place the sending endpoint in the correct state for further processing.

If the endpoint receiving data has allocated a buffer that is too small to hold the data, the OTRcvUData function returns with the T_MORE bit set in the flags parameter. In this case, you should call the OTRcvUData function repeatedly until the T_MORE bit is cleared.
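
The following sketch sends and receives a single datagram. The destination address is passed in opaquely because its format depends on the protocol you configured, and the buffer sizes are illustrative.

#include <OpenTransport.h>

static OSStatus SendDatagram(EndpointRef ep, void* destAddr, ByteCount addrLen,
                             void* data, ByteCount dataLen)
{
    TUnitData ud;

    OTMemzero(&ud, sizeof(ud));
    ud.addr.buf  = (UInt8*)destAddr;
    ud.addr.len  = addrLen;
    ud.udata.buf = (UInt8*)data;
    ud.udata.len = dataLen;
    return OTSndUData(ep, &ud);
}

static OSStatus ReceiveDatagram(EndpointRef ep, UInt8* buf, ByteCount bufSize,
                                ByteCount* received)
{
    TUnitData ud;
    OTFlags   flags;
    OSStatus  err;

    OTMemzero(&ud, sizeof(ud));
    ud.udata.buf    = buf;
    ud.udata.maxlen = bufSize;

    err = OTRcvUData(ep, &ud, &flags);     /* T_MORE in flags: buffer too small */
    if (err == kOTNoError)
        *received = ud.udata.len;
    return err;
}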

Using Connection-Oriented Transactionless Service

You use connection-oriented transactionless service, such as provided by ADSP, to exchange full-duplex streams of data across a network. Connection-oriented transactionless endpoints use the OTSnd function to send data and the OTRcv function to receive data. Either endpoint can call either of these functions. Parameters to the OTSnd function identify the endpoint sending the data, the buffer that holds the data, the size of the data, and a flags value that specifies whether the data sent is normal or expedited and whether multiple sends are being used to send the data. Parameters to the OTRcv function identify the receiving endpoint, the area in memory where the data should be copied, the size of the data, and a flags value that specifies whether the client needs to call OTRcv more than once to retrieve the data being sent.

Some endpoints support the use of expedited data, and some support the use of separators to break the data stream into logical units. You need to examine the endpoint's TEndpointInfo structure to determine whether the endpoint supports either of these features.

IMPORTANT
Values for the tsdu and etsdu fields of the TEndpointInfo structure that are returned when you open an endpoint might change after the endpoint is connected because the endpoint providers can negotiate different values when establishing a connection. If the endpoint supports variable maximum limits for TSDU and ETSDU size, you should call the OTGetEndpointInfo function after the connection has been established to determine what the current limits are.
To send expedited data, you must set the T_EXPEDITED bit in the flags parameter. If the receiving client is in the middle of reading normal data and the OTRcv function returns expedited data, the next call to OTRcv that returns without T_EXPEDITED set in the flags field resumes delivery of the normal data at the point where it was interrupted. It is the client's responsibility to remember where that was.

There are several ways of breaking up a data stream into logical size units.

Transferring Data Between Transaction-Based Endpoints

Open Transport defines two sets of functions that you can use to conclude a transaction. One set is defined for connectionless transactions; the other set is defined for connection-oriented transactions. A transaction is a process during which one endpoint, the requester, sends a request for a service. The remote endpoint, called the responder, reads the request, performs the service, and sends a reply. When the requester receives the reply, the transaction is complete.

You can implement applications that use transactions in either of two ways, depending on whether you use connectionless or connection-oriented service; both are described in the sections that follow.

Because one endpoint can conduct multiple transactions at any one time, it is crucial that requesters and responders be able to distinguish one transaction from another. This is done by means of a transaction ID, a number that uniquely identifies a transaction. Because this is not the same number for the requester as it is for the responder, some explanation is required. Figure 3-9 shows how the transaction ID is generated by the requesting application and the provider during the course of a transaction.

Figure 3-9 How a transaction ID is generated

The requester initiates a transaction by sending a request. The requester passes information about the request in a data structure that includes a seq field, which specifies the transaction ID of the request. The requester initializes this field to some arbitrary, unique number. Before sending the request, the endpoint provider saves this number in an internal table and assigns another number to the seq field, which it guarantees to be unique for the requester's machine. The endpoint provider also saves the new number along with the requester-generated sequence number. For example, in Figure 3-9, the requester assigns the number 1001; the endpoint provider assigns the number 5123.

When the responder receives the request, it reads the request information, including the provider-generated sequence number, into buffers it has reserved for the request data. When the responder sends a reply, it specifies the sequence number it read when it received the request.

Before the requester's endpoint provider advises the requester that the reply has arrived, it examines the sequence number of the reply and looks in its internal table to determine which requester-generated sequence number it matches. It then substitutes that number for the sequence number it received from the responder. By using this method Open Transport guarantees that transactions are uniquely identified, and the requester is able to match incoming replies with outgoing requests.

Using Connectionless Transaction-Based Service

You use connectionless transaction-based service, such as provided by ATP, to enable two connectionless endpoints to complete a transaction.

The requester initiates the transaction by calling the OTSndURequest function. Parameters to the OTSndURequest function specify the destination address, the request data, any options, and a sequence number to identify this transaction. The requester must supply a sequence number if it is sending multiple requests, so that later on it can match replies to requests. The requester can cancel an outgoing request by calling the OTCancelURequest function. A requester can implement its own timeout mechanism by installing a Time Manager task and calling the OTCancelURequest function after a specific amount of time has elapsed without a response to the request.

If the responder is synchronous and blocking, the OTRcvURequest function returns after it has read the request. If the responder is asynchronous or not blocking and has a notifier installed, the endpoint provider calls the notifier, passing T_REQUEST for the code parameter. When the responder receives this event, it must call the OTRcvURequest function to read the request. On return, parameters to the OTRcvURequest function specify the address of the requester, option values, the request data, flags information, and a sequence number to identify the transaction. When the responder sends a reply to the request, it must use the same sequence number for the reply. If the responder's buffer is too small to contain the request, the endpoint provider sets the T_MORE bit in the flags parameter. The responder must call the OTRcvURequest function until the T_MORE bit is clear, which indicates that the entire request has been read.

Having read the request, the responder can reply to the request using the OTSndUReply function or reject the request using the OTCancelUReply function. Although the requester is not advised that the responder has rejected a request, it's important that the responder explicitly cancel an incoming request in order to free memory reserved by the OTRcvURequest function.
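
The following synchronous sketch shows a responder that reads one complete request and sends a one-buffer reply with the matching sequence number. The 64-byte address buffer is a placeholder, and error handling is kept minimal.

#include <OpenTransport.h>

static OSStatus ServeOneRequest(EndpointRef ep,
                                UInt8* reqBuf, ByteCount reqBufSize,
                                void* replyData, ByteCount replyLen)
{
    TUnitRequest req;
    TUnitReply   reply;
    UInt8        addrBuf[64];     /* placeholder; size from TEndpointInfo.addr */
    OTFlags      flags;
    OSStatus     err;
    ByteCount    total = 0;

    OTMemzero(&req, sizeof(req));
    req.addr.buf    = addrBuf;
    req.addr.maxlen = sizeof(addrBuf);

    do {                                       /* loop while T_MORE is set */
        req.udata.buf    = reqBuf + total;
        req.udata.maxlen = reqBufSize - total;
        err = OTRcvURequest(ep, &req, &flags);
        if (err != kOTNoError)
            return err;
        total += req.udata.len;
    } while (flags & T_MORE);

    OTMemzero(&reply, sizeof(reply));
    reply.udata.buf = (UInt8*)replyData;
    reply.udata.len = replyLen;
    reply.sequence  = req.sequence;            /* must match the request */

    return OTSndUReply(ep, &reply, 0);
}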

If the requester is in synchronous blocking mode, the OTRcvUReply function waits until a reply comes in. Otherwise, if a notifier is installed, the endpoint provider calls the notifier, passing T_REPLY for the code parameter. The notifier must call the OTRcvUReply function. On return, parameters to the function specify the address of the endpoint sending the reply, option values, flag values, reply data, and a sequence number that identifies the request matching this reply. If the T_MORE bit is set in the flags parameter, the requester has allocated a buffer that is too small to contain the reply data. The requester must call the OTRcvUReply function until the T_MORE bit is clear; this indicates that the complete reply has been read.
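
A sketch of the requester's side follows: one routine sends the request, and another, called from the notifier when T_REPLY arrives, drains replies until kOTNoDataErr. The global reply buffer and its 578-byte size are illustrative, not part of the Open Transport API.

#include <OpenTransport.h>

static UInt8 gReplyBuf[578];      /* illustrative reply buffer */

static OSStatus SendOneRequest(EndpointRef ep, void* destAddr, ByteCount addrLen,
                               void* reqData, ByteCount reqLen, OTSequence seq)
{
    TUnitRequest req;

    OTMemzero(&req, sizeof(req));
    req.addr.buf  = (UInt8*)destAddr;
    req.addr.len  = addrLen;
    req.udata.buf = (UInt8*)reqData;
    req.udata.len = reqLen;
    req.sequence  = seq;                /* our own ID for matching the reply */
    return OTSndURequest(ep, &req, 0);
}

/* Call this when the notifier receives T_REPLY. */
static void HandleReplies(EndpointRef ep)
{
    TUnitReply reply;
    OTFlags    flags;
    OSStatus   err;

    do {
        OTMemzero(&reply, sizeof(reply));
        reply.udata.buf    = gReplyBuf;
        reply.udata.maxlen = sizeof(gReplyBuf);
        err = OTRcvUReply(ep, &reply, &flags);
        if (err == kOTNoError) {
            /* reply.sequence identifies the matching request. */
        }
    } while (err == kOTNoError);
    /* err is kOTNoDataErr when no more replies are queued. */
}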

If the request is rejected or fails in some other way, the requester still receives a T_REPLY event, but the OTRcvUReply function returns with the kETIMEDOUTErr result. In this case, the only useful information returned by the function is the sequence number identifying the request that failed.

Figure 3-10 illustrates how connectionless transaction-based endpoints in asynchronous mode exchange data.

Figure 3-10 Data transfer using connectionless transaction-based endpoints in asynchronous mode

Using Connection-Oriented Transaction-Based Service

Connection-oriented transaction-based endpoints allow you to transfer data in exactly the same way as connectionless transaction-based endpoints except that, because the endpoints are connected, it is not necessary to specify an address when using the functions to send and receive requests and replies. The only other difference is that a connection-oriented transaction may be interrupted by a connection or disconnection request.

The section "Using Connectionless Transaction-Based Service," beginning on page 3-39 describes the sequence of functions used to transfer data using a transaction. Figure 3-11 shows the sequence of functions called during a connection-oriented transaction; both requester and responder are in asynchronous mode. This sequence is the same as for connectionless transaction-based service, as shown in Figure 3-10 on page 3-41. Of course, you use different functions to complete these two types of transactions: the names of the functions shown in Figure 3-11 do not include a "U" in the function name.

Figure 3-11 Data transfer using connection-oriented transaction-based endpoints in asynchronous mode

For information about how to handle disconnection requests that might occur during a transaction, see "Using Orderly Disconnects," beginning on page 3-31.



© Apple Computer, Inc.
15 AUG 1996